On Jun 14, 2:02pm, jon madison wrote:
> i'm writing a sound app that wants to play more than
> one sound at a time. (it's a drum machine, has different
> pads.) i want the silly thing to know that when i press
> a button it should go on & play the next sample, not sit & wait
> till the queue has finished moving the samples to the audio port.
> i can't check if this is so *before* i open the port...
>
> is this the time when i should follow the instructions on:
>
> A) using non-blocking i/o
> B) doing that sproc() stuff
>
> or all of the above?

Probably B, plus a few other things. Here's a general architecture that
works well for low-latency interactive apps:

Have two processes: a UI process and an audio process. The latter runs
at high priority, and spends most of its time blocked so it doesn't eat
the CPU. Use sproc() to create the second process so the two share
memory.

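A rough sketch of that split (my sketch, not tested code; audio_main()
is a placeholder name, and you may also want schedctl() in the child to
raise its priority):

#include <sys/types.h>
#include <sys/prctl.h>
#include <stdio.h>
#include <stdlib.h>

void audio_main(void *arg);    /* the audio loop, sketched below */

int
main(int argc, char **argv)
{
    /* PR_SADDR: the child shares our address space, so both
       processes see the same event queue and sample buffers. */
    pid_t audio_pid = sproc(audio_main, PR_SADDR);

    if (audio_pid == -1) {
        perror("sproc");
        exit(1);
    }

    /* ... run the UI in this process ... */
    return 0;
}
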
The audio process sits in a loop that looks like:

    while(1) {
        block until one of two things happens:
            (a) we get an event to do something from the UI process
            (b) the audio port needs more data
        if (a) {
            get the event
            do what it says
        }
        else if (b) {
            compute the next chunk of data to be sent out the port
        }
    }

The blocking is accomplished with select(), which means you need file
descriptors associated with actions (a) and (b). For (a) the best way
is a pollable semaphore. If you haven't played with these, they're
quite cool; see the usnewsema and usnewpollsema man pages. (I'm
assuming you know how semaphores work in general. If not, they're
pretty easy to learn, and really useful.) You get a semaphore to share
between the two processes. A pollable semaphore works like this: you
initialize the semaphore to 0. The audio process gets a file
descriptor for the semaphore, upon which it can block. If the audio
process does a uspsema() and it fails, the audio process can then
block in select() waiting for the semaphore to become ready. When the
UI process does a usvsema(), the audio process wakes up. So you have
the UI process do the usvsema() whenever it has something new for the
audio process to do. A nice clean way to have one process wake
another up.

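In code, the handshake looks something like this (my sketch; the arena
file name is arbitrary and error checking is omitted):

#include <ulocks.h>

static usptr_t *arena;
static usema_t *event_sema;
static int      sema_fd;     /* the audio process selects on this */

void
setup_event_sema(void)
{
    arena      = usinit("/tmp/drum.arena");        /* shared arena */
    event_sema = usnewpollsema(arena, 0);          /* initial count 0 */
    sema_fd    = usopenpollsema(event_sema, 0600); /* fd for select() */
}

/* UI process: put the event in the shared queue, then wake
   the audio process. */
void
post_event(void)
{
    /* ... append to the shared event queue ... */
    usvsema(event_sema);
}
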
For (b) you use ALgetfd() to get a file descriptor for the audio port,
and ALsetfillpoint() to control when that descriptor becomes ready.

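Putting the two together, the audio loop might look like the sketch
below. Again this is mine and untested: handle_ui_events() and
mix_chunk() are placeholders, and I believe an output port's fd is
selected in the *write* set.

#include <audio.h>
#include <ulocks.h>
#include <sys/types.h>
#include <sys/time.h>
#include <unistd.h>

#define CHUNK 441            /* 10 ms at 44.1 kHz, mono */

extern usema_t *event_sema;  /* from the semaphore sketch above */
extern int      sema_fd;

void handle_ui_events(void);         /* drain the shared event queue */
void mix_chunk(short *out, long n);  /* sketched further down */

void
audio_loop(ALport port)
{
    int    audio_fd = ALgetfd(port);
    int    maxfd = (sema_fd > audio_fd ? sema_fd : audio_fd) + 1;
    short  chunk[CHUNK];
    fd_set rfds, wfds;

    /* Wake up when the queue has room for one more chunk. */
    ALsetfillpoint(port, CHUNK);

    /* Arm the semaphore: uspsema() returns 0 once the count is 0,
       after which sema_fd becomes readable on the next usvsema(). */
    while (uspsema(event_sema) == 1)
        handle_ui_events();

    for (;;) {
        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        FD_SET(sema_fd, &rfds);
        FD_SET(audio_fd, &wfds);

        if (select(maxfd, &rfds, &wfds, NULL, NULL) < 0)
            continue;

        if (FD_ISSET(sema_fd, &rfds)) {
            /* (a) The wakeup hands us the semaphore: drain the
               event queue, then re-arm, catching any events
               posted in the meantime. */
            do
                handle_ui_events();
            while (uspsema(event_sema) == 1);
        }
        if (FD_ISSET(audio_fd, &wfds)) {
            /* (b) The queue has drained to the fill point:
               mix and write the next 10 ms chunk. */
            mix_chunk(chunk, CHUNK);
            ALwritesamps(port, chunk, CHUNK);
        }
    }
}
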
The key to an *interactive* application is low latency. That's what
you said: you don't want to sit and wait for all the samples to go
out the queue. When the user hits a button, he/she wants to hear the
sound right away.

The problem is, it's a queue. If you've already put stuff in it, you
have to wait. So here's how you do it: you keep only a couple of
milliseconds' worth of data in the queue at a time. That way, any new
data goes out almost immediately. Let's say you want a 20 millisecond
latency in your queue. You block until there is only 10 milliseconds'
worth of data left in the queue (ALsetfillpoint sets the point at
which you get unblocked), then you put 10 milliseconds' worth of data
in. This means your audio app wakes up every 10 milliseconds to put
another 10 ms of data in the queue. You are keeping the queue filled
with a roughly constant amount of data instead of allowing it to fill
up, which means your application is in control of the latency. If you
want lower latency, you maintain less data in your queue. The
downside is that your app needs to be scheduled more often. Another
little tip: if you maintain only a little data in your queue, you
don't need a big queue; use ALsetqueuesize to shrink your queue so
you don't waste system memory.

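Concretely, the port setup might look like this (my numbers, assuming
44.1 kHz mono 16-bit samples; note the AL counts in samples):

#include <audio.h>

ALport
open_drum_port(void)
{
    const long chunk = 44100 / 100;   /* 10 ms = 441 samples */
    ALconfig cfg;
    ALport   port;

    cfg = ALnewconfig();
    ALsetqueuesize(cfg, 2 * chunk);   /* 20 ms total: small queue,
                                         low latency */
    port = ALopenport("drums", "w", cfg);
    ALfreeconfig(cfg);

    /* Become selectable when the queue has room for one more
       chunk, i.e. when only ~10 ms of data remains queued. */
    ALsetfillpoint(port, chunk);
    return port;
}
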
So the remaining trick is generating the data itself. The coolest way
to do this is to allow multiple simultaneous sounds. If a user hits a
cymbal pad, it's got a long decay; you don't want to cut it off when
he/she hits some other pad. So now your little piece of code that
generates 10 ms worth of data has to remember which sounds are
currently playing, and where each one is in the sound. So you keep a
little list. When your audio process gets an "event" from the UI to
start playing a sound, all it does is add that sound to the list and
note that the sound is at the beginning. Next time the audio process
wakes up to fill the queue, it generates 10 ms worth of data for ALL
the sounds on the list, mixes the data together, and advances time by
10 ms for every item on the list. Items that reach the end of their
sound are removed.

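A sketch of that list and mixer (again mine, not anything from the AL;
the naive sum can clip, so a real app would scale or saturate):

#define MAX_VOICES 16

typedef struct {
    short *samples;    /* the whole drum sample */
    long   length;     /* total samples */
    long   pos;        /* how far we've played */
} Voice;

static Voice voices[MAX_VOICES];
static int   nvoices;

/* The event handler calls this when the UI says "pad hit":
   add the sound to the list at position 0. */
void
start_sound(short *samples, long length)
{
    if (nvoices < MAX_VOICES) {
        voices[nvoices].samples = samples;
        voices[nvoices].length  = length;
        voices[nvoices].pos     = 0;
        nvoices++;
    }
}

/* Mix one chunk from every active voice, advance each voice
   by the chunk size, and drop voices that have played out. */
void
mix_chunk(short *out, long count)
{
    int  i;
    long j;

    for (j = 0; j < count; j++)
        out[j] = 0;

    for (i = 0; i < nvoices; ) {
        Voice *v = &voices[i];
        long   n = v->length - v->pos;

        if (n > count)
            n = count;
        for (j = 0; j < n; j++)
            out[j] += v->samples[v->pos + j];   /* naive mix */
        v->pos += n;
        if (v->pos >= v->length)
            voices[i] = voices[--nvoices];      /* swap-remove */
        else
            i++;
    }
}
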
Alternatively, you could use the MIDI library to send MIDI to an
internal software synthesizer or an external synth.

-Doug